
    Coronary fly-through or virtual angioscopy using dual-source MDCT data

    Coronary fly-through or virtual angioscopy (VA) has been studied since its invention in 2000. However, its application was limited because it requires an optimal computed tomography (CT) scan and time-consuming post-processing. Recent advances in post-processing software facilitate easy construction of VA, but until now image quality was insufficient in most patients. The introduction of dual-source multidetector CT (MDCT) could enable VA in all patients. Twenty patients were scanned with a dual-source MDCT scanner (Definition, Siemens, Forchheim, Germany) using a standard coronary artery protocol. Post-processing was performed on an Aquarius Workstation (TeraRecon, San Mateo, Calif.). The length travelled per major branch was recorded in millimetres, together with the time required in minutes. VA could be performed in every patient for each of the major coronary arteries. The mean (range) length of the automated fly-through was 80 (32–107) mm for the left anterior descending artery (LAD), 75 (21–116) mm for the left circumflex artery (LCx), and 109 (21–190) mm for the right coronary artery (RCA). Calcifications and stenoses were visualised, as well as most side branches. The mean time required was 3 min for the LAD, 2.5 min for the LCx, and 2 min for the RCA. Dual-source MDCT allows high-quality visualisation of the coronary arteries in every patient because scanning with this machine is independent of the heart rate, as shown by the successful VA in all patients. The potential clinical value of VA should be determined in the near future.

    Applications of artificial intelligence (AI) in diagnostic radiology: a technography study

    Objectives: Why is there a major gap between the promises of AI and its applications in the domain of diagnostic radiology? To answer this question, we systematically review and critically analyze AI applications in the radiology domain. Methods: We systematically analyzed these applications based on their focal modality and anatomic region, as well as their stage of development, technical infrastructure, and approval status. Results: We identified 269 AI applications in the diagnostic radiology domain, offered by 99 companies. We show that AI applications are primarily narrow in terms of tasks, modality, and anatomic region. A majority of the available AI functionalities focus on supporting “perception” and “reasoning” in the radiology workflow. Conclusions: We contribute by (1) offering a systematic framework for analyzing and mapping technological developments in the diagnostic radiology domain, (2) providing empirical evidence regarding the landscape of AI applications, and (3) offering insights into the current state of AI applications. Accordingly, we discuss the potential impacts of AI applications on radiology work and highlight future possibilities for developing these applications. Key Points: • Many AI applications have been introduced to the radiology domain, and their number and diversity are growing rapidly. • Most AI applications are narrow in terms of modality, body part, and pathology. • Many applications focus on supporting “perception” and “reasoning” tasks.

    Automatic Pulmonary Nodule Detection in CT Scans Using Convolutional Neural Networks Based on Maximum Intensity Projection

    Accurate pulmonary nodule detection is a crucial step in lung cancer screening. Computer-aided detection (CAD) systems are not routinely used by radiologists for pulmonary nodule detection in clinical practice despite their potential benefits. Maximum intensity projection (MIP) images improve the detection of pulmonary nodules in radiological evaluation with computed tomography (CT) scans. Inspired by the clinical methodology of radiologists, we aim to explore the feasibility of applying MIP images to improve the effectiveness of automatic lung nodule detection using convolutional neural networks (CNNs). We propose a CNN-based approach that takes MIP images of different slab thicknesses (5 mm, 10 mm, 15 mm) and 1 mm axial section slices as input. Such an approach augments the two-dimensional (2-D) CT slice images with more representative spatial information that helps discriminate nodules from vessels through their morphologies. Our proposed method achieves a sensitivity of 92.67% with 1 false positive per scan and a sensitivity of 94.19% with 2 false positives per scan for lung nodule detection on 888 scans in the LIDC-IDRI dataset. The use of thick MIP images helps the detection of small pulmonary nodules (3 mm–10 mm) and results in fewer false positives. Experimental results show that utilizing MIP images can increase the sensitivity and lower the number of false positives, which demonstrates the effectiveness and significance of the proposed MIP-based CNN framework for automatic pulmonary nodule detection in CT scans. The proposed method also shows the potential for CNNs to benefit from incorporating the clinical procedure into nodule detection. Comment: Submitted to IEEE TM
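    The slab MIPs that this abstract describes as network input can be sketched as a voxel-wise maximum over a sliding window of axial slices. A minimal illustration in numpy, assuming a stack of 1 mm axial sections as in the paper; `slab_mips` is a hypothetical helper, not code from the study:

    ```python
    import numpy as np

    def slab_mips(volume, slab_mm, slice_mm=1.0):
        """Axial maximum intensity projections over sliding slabs.

        volume:   3-D array of axial slices, shape (n_slices, H, W).
        slab_mm:  slab thickness in millimetres (e.g. 5, 10, 15 as in the paper).
        slice_mm: spacing between axial slices (assumed 1 mm sections).
        """
        k = max(1, int(round(slab_mm / slice_mm)))  # number of slices per slab
        n = volume.shape[0]
        mips = np.empty_like(volume)
        for i in range(n):
            lo = max(0, i - k // 2)          # slab roughly centred on slice i,
            hi = min(n, lo + k)              # clipped at the volume boundaries
            mips[i] = volume[lo:hi].max(axis=0)  # voxel-wise maximum over the slab
        return mips
    ```

    In the paper's setup, projections at several slab thicknesses would be stacked alongside the original 1 mm slice as input channels; a thicker slab pulls more of a vessel's course into one image, which is what helps separate tubular vessels from compact nodules.
    
    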

    The natural language processing of radiology requests and reports of chest imaging: comparing five transformer models’ multilabel classification and a proof-of-concept study

    Background: Radiology requests and reports contain valuable information about diagnostic findings and indications, and transformer-based language models are promising for more accurate text classification. Methods: In a retrospective study, 2256 radiologist-annotated radiology requests (8 classes) and reports (10 classes) were divided into training and testing datasets (90% and 10%, respectively) and used to train 32 models. Performance metrics were compared by model type (LSTM, Bertje, RobBERT, BERT-clinical, BERT-multilingual, BERT-base), text length, data prevalence, and training strategy. The best models were used to predict the categories of the remaining 40,873 cases in the datasets of requests and reports. Results: The RobBERT model performed best after 4000 training iterations, with AUC values ranging from 0.808 [95% CI (0.757–0.859)] to 0.976 [95% CI (0.956–0.996)] for the requests and from 0.746 [95% CI (0.689–0.802)] to 1.0 [95% CI (1.0–1.0)] for the reports. The AUC for the classification of normal reports was 0.95 [95% CI (0.922–0.979)]. The predicted data demonstrated variability in both the diagnostic yield across request classes and the request patterns related to COVID-19 hospital admission data. Conclusion: Transformer-based natural language processing is feasible for the multilabel classification of chest imaging request and report items. Diagnostic yield varies with the information in the requests.
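    The per-class AUC values that this study reports for its multilabel classifiers can be computed from model scores with the rank-sum (Mann-Whitney U) formulation of ROC AUC. A minimal numpy sketch, not code from the study; function names are illustrative, and tied scores are not specially handled:

    ```python
    import numpy as np

    def auc(scores, labels):
        """ROC AUC for one binary label via the rank-sum formulation.

        Equals the probability that a random positive case is scored
        higher than a random negative case (ties are not averaged).
        """
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=int)
        order = scores.argsort()
        ranks = np.empty(len(scores))
        ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ranks by score
        n_pos = labels.sum()
        n_neg = len(labels) - n_pos
        # Mann-Whitney U of the positives, normalised to [0, 1]
        return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

    def per_class_auc(probs, targets):
        """One AUC per label column for a multilabel problem.

        probs:   (n_samples, n_classes) predicted probabilities.
        targets: (n_samples, n_classes) binary ground-truth labels.
        """
        return [auc(probs[:, j], targets[:, j]) for j in range(probs.shape[1])]
    ```

    Treating each request or report class as an independent binary column in this way is the standard evaluation for multilabel classification, which is why the study can report a separate AUC (with its own confidence interval) for every class.
    
    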